

Search for: All records

Creators/Authors contains: "Beaty, Roger E."


  1. Abstract

    Complex cognitive processes, like creative thinking, rely on interactions among multiple neurocognitive processes to generate effective and innovative behaviors on demand, a process in which the brain’s connector hubs play a crucial role. However, the unique contribution of specific hub sets to creative thinking is unknown. Employing three functional magnetic resonance imaging datasets (total N = 1,911), we demonstrate that connector hub sets are organized hierarchically by diversity, with “control-default hubs”—which combine regions from the frontoparietal control and default mode networks—positioned at the apex. Specifically, control-default hubs exhibit the most diverse resting-state connectivity profiles and play the most substantial role in facilitating interactions between regions with dissimilar neurocognitive functions, a phenomenon we refer to as “diverse functional interaction”. Critically, we found that the involvement of control-default hubs in facilitating diverse functional interaction robustly relates to creativity, explaining both task-induced functional connectivity changes and individual creative performance. Our findings suggest that control-default hubs drive diverse functional interaction in the brain, enabling complex cognition, including creative thinking. We thus uncover a biologically plausible explanation that further elucidates the widely reported contributions of certain frontoparietal control and default mode network regions in creativity studies.
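    The “diversity” of a hub’s connectivity profile can be illustrated with the participation coefficient, a standard graph-theoretic measure of how evenly a node’s edge weight is spread across networks. This is an illustrative sketch, not the authors’ analysis pipeline; the toy connectivity matrix and network labels below are invented:

    ```python
    import numpy as np

    def participation_coefficient(W, labels):
        """Participation coefficient for each node of a weighted graph.

        W      : (n, n) symmetric connectivity matrix, nonnegative weights
        labels : (n,) network assignment for each node

        Values near 1 mean a node's edges are spread evenly across
        networks (a diverse "connector hub"); values near 0 mean its
        edges stay within a single network.
        """
        strength = W.sum(axis=1)                      # total edge weight per node
        pc = np.ones(len(W))
        for net in np.unique(labels):
            within = W[:, labels == net].sum(axis=1)  # weight toward this network
            pc -= (within / strength) ** 2
        return pc

    # Toy graph: nodes 0-1 in network "A", nodes 2-3 in network "B".
    W = np.array([[0, 1, 1, 1],
                  [1, 0, 0, 0],
                  [1, 0, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    labels = np.array(["A", "A", "B", "B"])
    print(participation_coefficient(W, labels))
    # Node 0 connects across both networks (high PC);
    # node 1 connects only within its own network (PC = 0).
    ```

    Ranking nodes by such a diversity measure is one conventional way to identify connector hubs before asking, as the abstract does, which hub sets matter most for a given cognitive function.
    
    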

  2. Abstract

    The visual modality is central to both the reception and expression of human creativity. Creativity assessment paradigms, such as structured drawing tasks (Barbot, 2018), seek to characterize this key modality of creative ideation. However, visual creativity assessment paradigms often rely on cohorts of expert or naïve raters to gauge the creativity of the outputs, at the cost of substantial human investment in both time and labor. To address these issues, recent work has leveraged machine learning techniques to automatically extract creativity scores in the verbal domain (e.g., SemDis; Beaty & Johnson, 2021). Yet a comparably well-vetted solution for the assessment of visual creativity is missing. Here, we introduce AuDrA, an Automated Drawing Assessment platform that extracts visual creativity scores from simple drawing productions. Using a collection of line drawings and human creativity ratings, we trained AuDrA and tested its generalizability to untrained drawing sets, raters, and tasks. Across four datasets, nearly 60 raters, and over 13,000 drawings, we found AuDrA scores to be highly correlated with human creativity ratings for new drawings on the same drawing task (r = .65 to .81; mean = .76). Importantly, correlations between AuDrA scores and human ratings surpassed those between drawings’ elaboration (i.e., ink on the page) and human creativity ratings, suggesting that AuDrA is sensitive to features of drawings beyond simple complexity. We discuss future directions and limitations, and link the trained AuDrA model and a tutorial (https://osf.io/kqn9v/) to enable researchers to efficiently assess new drawings.
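    The validation logic described above — checking that automated scores track human ratings better than a mere complexity proxy like ink on the page — reduces to comparing Pearson correlations. A minimal sketch with invented, randomly generated stand-in data (the variable names and effect sizes are illustrative, not taken from the AuDrA study):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Invented stand-ins: per-drawing human creativity ratings, automated
    # model scores, and elaboration (amount of ink on the page).
    n = 200
    human_ratings = rng.normal(size=n)
    ink_amount = 0.4 * human_ratings + rng.normal(scale=0.9, size=n)
    model_scores = 0.8 * human_ratings + rng.normal(scale=0.6, size=n)

    def pearson_r(x, y):
        """Pearson correlation between two 1-D arrays."""
        return np.corrcoef(x, y)[0, 1]

    r_model = pearson_r(model_scores, human_ratings)
    r_ink = pearson_r(ink_amount, human_ratings)

    # The key check: the model should track human judgments more
    # closely than a simple complexity proxy does.
    print(f"model vs. human: r = {r_model:.2f}")
    print(f"ink   vs. human: r = {r_ink:.2f}")
    ```

    In the abstract’s terms, the first correlation corresponds to AuDrA’s .65–.81 range, and its exceeding the second is what licenses the claim that the model captures more than elaboration.
    
    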

  3. Free, publicly-accessible full text available July 1, 2024
  4. Free, publicly-accessible full text available August 1, 2024
  5. Free, publicly-accessible full text available July 4, 2024
  6. Free, publicly-accessible full text available July 20, 2024
  7. Free, publicly-accessible full text available June 1, 2024
  8. Free, publicly-accessible full text available June 1, 2024
    Semantic distance scoring provides an attractive alternative to other approaches for scoring responses in creative thinking tasks, and evidence in its support has grown over the last few years. One recent proposal combines multiple semantic spaces to better balance the idiosyncratic influences of each space; the final semantic distance score for each response is then a composite or factor score. However, semantic spaces are not necessarily equally weighted in mean scores, and the use of factor scores requires high factor determinacy (i.e., a high correlation between estimated and true factor scores). Hence, in this work, we examined the weightings underlying mean scores, mean scores of standardized variables, factor loadings, weights that maximize reliability, and equally effective weights on common verbal creative thinking tasks. Both empirical and simulated factor determinacy, as well as Gilmer-Feldt composite reliability, were mostly good to excellent (i.e., > .80) across two task types (Alternate Uses and Creative Word Association), eight samples of data, and all weighting approaches. Person-level validity findings were also highly comparable across weighting approaches. We discuss in detail the observed nuances and challenges of the different weightings and the question of using composite versus factor scores.
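    The two core quantities here — a semantic distance per response, and a composite across multiple semantic spaces — can be sketched briefly. This is a toy illustration under assumptions: the numbers below are invented distances, and z-scored equal weighting is just one of the several weighting schemes the abstract compares (real applications would score responses against trained spaces such as GloVe or word2vec):

    ```python
    import numpy as np

    def semantic_distance(cue_vec, resp_vec):
        """1 minus cosine similarity: larger = more remote (creative) response."""
        cos = cue_vec @ resp_vec / (np.linalg.norm(cue_vec) * np.linalg.norm(resp_vec))
        return 1.0 - cos

    def composite_score(distances):
        """Equally weighted mean of z-scored distances across spaces.

        Standardizing each space first keeps any one space from
        dominating the composite through its scale alone.
        """
        d = np.asarray(distances, dtype=float)  # shape: (n_spaces, n_responses)
        z = (d - d.mean(axis=1, keepdims=True)) / d.std(axis=1, keepdims=True)
        return z.mean(axis=0)

    # Toy data: two "semantic spaces", three responses to the same cue.
    space1 = [0.2, 0.5, 0.9]
    space2 = [0.3, 0.4, 0.8]
    print(composite_score([space1, space2]))
    ```

    A factor-score alternative would replace the equal-weight mean with weights estimated from a one-factor model, which is where the determinacy concerns raised in the abstract come into play.
    
    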